Augmenting Greybox Fuzzing with Generative AI
Hu, Jie, Zhang, Qian, Yin, Heng
In recent years, fuzz testing has emerged as an effective technique for testing software systems. For example, fuzz testing has been remarkably successful in uncovering critical security bugs in applications such as the Chrome web browser [1] and the SQLite database [11]. Generally, fuzz testing runs a program on seed inputs, mutates previous inputs to improve a given guidance metric such as branch coverage, and repeats this cycle of input mutation and target-program execution. During fuzzing, the target program is executed on a large number of generated test cases while its runtime behavior is monitored for vulnerabilities. It is therefore essential to generate test cases that effectively cover a wide range of execution paths and program behaviors. This comprehensive coverage enables thorough exploration of the program's functionality and helps uncover potential vulnerabilities or issues. The simplicity of fuzzing has made it a de facto testing procedure for large-scale software systems; however, its effectiveness rests on an inherent yet often overlooked assumption: that a set of arbitrary input mutations is likely to yield meaningful inputs. In fact, our extensive experience suggests that this assumption often does not hold for software systems that take highly structured data as inputs.
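The mutate-execute-measure cycle the abstract describes can be sketched as a minimal coverage-guided loop. This is a generic illustration, not the paper's system: the `coverage` function below is a hypothetical stand-in for real branch-coverage instrumentation, and `mutate` is an arbitrary byte-level mutator of the kind greybox fuzzers use.

```python
import random

def coverage(data: bytes) -> set:
    # Hypothetical stand-in for branch-coverage instrumentation:
    # treat each distinct adjacent byte pair as a covered "branch".
    return {(data[i], data[i + 1]) for i in range(len(data) - 1)}

def mutate(data: bytes) -> bytes:
    # Arbitrary byte-level mutation, as in greybox fuzzers.
    buf = bytearray(data)
    if buf:
        i = random.randrange(len(buf))
        buf[i] = random.randrange(256)
    return bytes(buf)

def fuzz(seeds, rounds=1000):
    corpus = list(seeds)
    seen = set()
    for s in corpus:
        seen |= coverage(s)
    for _ in range(rounds):
        candidate = mutate(random.choice(corpus))
        new_cov = coverage(candidate) - seen
        if new_cov:  # keep inputs that reach previously unseen branches
            corpus.append(candidate)
            seen |= new_cov
    return corpus, seen
```

The assumption the abstract questions is visible in `mutate`: for highly structured input formats, random byte flips mostly yield inputs the parser rejects early, so few mutants ever reach new branches.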
Predicting Execution Time of Computer Programs Using Sparse Polynomial Regression
Huang, Ling, Jia, Jinzhu, Yu, Bin, Chun, Byung-gon, Maniatis, Petros, Naik, Mayur
Predicting the execution time of computer programs is an important but challenging problem in the community of computer systems. Existing methods require experts to perform detailed analysis of program code in order to construct predictors or select important features. We recently developed a new system to automatically extract a large number of features from program execution on sample inputs, on which prediction models can be constructed without expert knowledge. In this paper we study the construction of predictive models for this problem. We propose the SPORE (Sparse POlynomial REgression) methodology to build accurate prediction models of program performance using feature data collected from program execution on sample inputs.
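The core idea behind sparse polynomial regression can be illustrated roughly as follows. This is a generic sketch, not the authors' SPORE implementation: it uses scikit-learn's `PolynomialFeatures` and `Lasso` on synthetic data in which only two polynomial terms of the raw features actually determine the "execution time".

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))   # 3 features extracted per program run
# Synthetic "execution time": only two polynomial terms matter.
y = 2.0 * X[:, 0] ** 2 + 3.0 * X[:, 0] * X[:, 1]

# Expand the raw features into all degree-2 polynomial terms, then fit an
# L1-penalized (sparse) linear model over the expanded terms.
poly = PolynomialFeatures(degree=2, include_bias=False)
Z = poly.fit_transform(X)
model = Lasso(alpha=0.01).fit(Z, y)
```

The L1 penalty drives most coefficients to (near) zero, so the surviving nonzero terms both give an accurate predictor and identify which polynomial combinations of features drive the predicted running time, without expert feature selection.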
Neuro-Optimization: Learning Objective Functions Using Neural Networks
Jeon, Younghan, Lee, Minsik, Choi, Jin Young
Mathematical optimization is widely used in various research fields. With a carefully designed objective function, mathematical optimization can be quite helpful in solving many problems. However, objective functions are usually hand-crafted, and designing a good one can be quite challenging. In this paper, we propose a novel framework to learn the objective function based on a neural network. The basic idea is to treat the neural network as an objective function and its input as an optimization variable. To learn the objective function from the training data, two processes are conducted: in the inner process, the optimization variable (the input of the network) is optimized to minimize the objective function (the network output) while the network weights are fixed; in the outer process, the weights are optimized based on how close the final solution of the inner process is to the desired solution. After learning the objective function, the solution for the test set is obtained in the same manner as the inner process. The potential and applicability of our approach are demonstrated by experiments on toy examples and a computer vision task, optical flow.
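The inner and outer processes can be sketched on a toy scalar problem. This is only an illustration of the bilevel structure, not the paper's method: a quadratic `objective` stands in for the neural network (its parameter `w` plays the role of the weights), and the outer gradient is taken by finite differences through the inner optimization rather than by backpropagation.

```python
def objective(x, w):
    # Toy stand-in for the neural "objective network": a quadratic bowl
    # whose minimum location is controlled by the learnable parameter w.
    return (x - w) ** 2

def inner_minimize(w, x0=0.0, lr=0.3, steps=50):
    # Inner process: optimize the input x while the parameter w is fixed.
    x = x0
    for _ in range(steps):
        grad = 2.0 * (x - w)          # d objective / d x
        x -= lr * grad
    return x

def outer_train(target, w0=0.0, lr=0.1, steps=100, eps=1e-4):
    # Outer process: adjust w so the inner minimizer lands near the desired
    # solution, using a finite-difference gradient through the inner loop.
    w = w0
    for _ in range(steps):
        loss = (inner_minimize(w) - target) ** 2
        loss_eps = (inner_minimize(w + eps) - target) ** 2
        w -= lr * (loss_eps - loss) / eps
    return w
```

At test time, only the inner process is rerun with the learned `w`, mirroring how the paper obtains solutions for the test set.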